16 research outputs found

    Contraction Obstructions for Connected Graph Searching

    We consider the connected variant of the classic mixed search game where, in each search step, the cleaned edges form a connected subgraph. We consider graph classes with bounded connected (and monotone) mixed search number and ask whether the obstruction set of those classes, with respect to the contraction partial ordering, is finite. In general, there is no guarantee that these sets are finite, as graphs are not well-quasi-ordered under the contraction partial ordering. In this paper we provide the obstruction set for k = 2, where k is the number of searchers we are allowed to use. This set is finite: it consists of 177 graphs and completely characterises the graphs with connected (and monotone) mixed search number at most 2. Our proof reveals that the "sense of direction" of an optimal search strategy matters for connected search, in contrast to the original, unconnected case. We also give a double-exponential lower bound on the size of the obstruction set for the classes where this set is finite.
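
    As an illustration of the contraction ordering these obstruction sets live in, the sketch below contracts a single edge of a small graph held as adjacency sets. The representation and helper are illustrative and not taken from the paper; the relevant point is that the class of graphs with connected (and monotone) mixed search number at most k is closed under exactly this operation.

```python
# Minimal sketch of edge contraction on an undirected simple graph,
# illustrating the contraction partial ordering used in the abstract.
# Representation and helper name are illustrative, not from the paper.

def contract(graph, u, v):
    """Return a new graph with the edge {u, v} contracted:
    u and v are merged into u; loops and parallel edges are dropped."""
    assert v in graph[u], "u and v must be adjacent"
    new_graph = {w: set(nbrs) for w, nbrs in graph.items() if w != v}
    for w in graph[v]:
        if w != u:
            new_graph[u].add(w)
            new_graph[w].discard(v)
            new_graph[w].add(u)
    new_graph[u].discard(v)
    new_graph[u].discard(u)
    return new_graph

# Example: contracting one edge of the 4-cycle yields the triangle.
c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(contract(c4, 0, 1))   # {0: {2, 3}, 2: {0, 3}, 3: {0, 2}}
```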

    Graphs with monotone connected mixed search number of at most two

    Graph searching is used to model a variety of problems and has close connections to variations of path decomposition. This work explores Monotone Connected Mixed Search. Metaphorically, we consider this problem in terms of searchers exploring a network of tunnels and rooms to locate an opponent. In each turn the opponent moves arbitrarily fast, while the searchers may only move to adjacent rooms. The objective is, given an arbitrary graph, to determine the minimum number of searchers for which there exists a valid series of moves that searches the graph. We show that the family of graphs requiring at most k searchers is closed under graph contraction. We exploit the close ties between the contraction ordering and the minor ordering to produce a number of structural decomposition techniques, and we show that there are 172 obstructions in the contraction order for the set of graphs requiring at most two searchers.
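
    Because membership in the class is characterised by a finite obstruction set in the contraction order, deciding whether a graph needs more than two searchers reduces to testing whether some obstruction can be obtained from it by edge contractions. The brute-force sketch below tests "H is a contraction of G" for tiny graphs by trying every partition of V(G) into connected blocks; the code and names are illustrative and not from the paper.

```python
# Brute-force sketch: is H obtainable from G by edge contractions alone?
# True iff V(G) can be split into |V(H)| connected blocks whose quotient
# graph is isomorphic to H.  Intended for tiny graphs only; the functions
# and representation are illustrative, not from the paper.
from itertools import permutations

def connected(graph, block):
    """Check that `block` induces a connected subgraph of `graph`."""
    block = set(block)
    start = next(iter(block))
    seen, stack = {start}, [start]
    while stack:
        v = stack.pop()
        for w in graph[v]:
            if w in block and w not in seen:
                seen.add(w)
                stack.append(w)
    return seen == block

def partitions(vertices):
    """Yield all set partitions of `vertices` (exponential; tiny inputs only)."""
    if not vertices:
        yield []
        return
    first, rest = vertices[0], vertices[1:]
    for part in partitions(rest):
        for i in range(len(part)):
            yield part[:i] + [part[i] + [first]] + part[i + 1:]
        yield part + [[first]]

def quotient(graph, blocks):
    """Adjacency matrix of the graph obtained by contracting each block."""
    index = {v: i for i, block in enumerate(blocks) for v in block}
    n = len(blocks)
    adj = [[False] * n for _ in range(n)]
    for v, nbrs in graph.items():
        for w in nbrs:
            if index[v] != index[w]:
                adj[index[v]][index[w]] = True
    return adj

def isomorphic(adj, h):
    """Brute-force isomorphism test between an adjacency matrix and graph h."""
    hv = list(h)
    n = len(hv)
    for perm in permutations(range(n)):
        if all((hv[j] in h[hv[i]]) == adj[perm[i]][perm[j]]
               for i in range(n) for j in range(n) if i != j):
            return True
    return False

def is_contraction_of(h, g):
    """True iff h can be obtained from g by contracting edges."""
    return any(all(connected(g, b) for b in blocks)
               and isomorphic(quotient(g, blocks), h)
               for blocks in partitions(list(g)) if len(blocks) == len(h))

# Example: the triangle is a contraction of the 4-cycle.
c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
k3 = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b"}}
print(is_contraction_of(k3, c4))   # True
```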

    Schedule data, not code

    Parallel programming is hard, and programmers still struggle to write code for shared-memory multicore architectures that is both free of concurrency errors and efficient. Tools have advanced, but for tasks that are not embarrassingly parallel, or not suitable for a limited model such as map/reduce, there is little help. We aim to address some major aspects of this still underserved area. We construct a model for parallelism, Data not Code (DnC), by starting with the observation that a majority of performance problems in parallel programming are rooted in the manipulation of data, and that a better approach is to schedule data, not code. Data items don’t exist in a vacuum but are instead organized into collections, so we focus on concurrent access to these collections from both task- and data-parallel operations. These concepts are already embraced by many programming models and languages, such as map/reduce, GraphLab and SQL. We seek to bring the excellent principles embodied in these models, such as declarative data-centric syntax and the myriad of optimizations it enables, to conventional programming languages like C++, making them available in a larger variety of contexts. To make this possible, we define new language constructs and augment proven techniques from databases for accessing arbitrary parts of a collection in a familiar and expressive manner. These not only provide the programmer with constructs that are easy to use and reason about, but also allow us to better extract and analyze programmer intentions and automatically produce code with complex runtime optimizations. We present Cadmium, a proof-of-concept DnC language, to demonstrate the effectiveness of our model. We implement a variety of programs and show that, without explicit parallel programming, they scale well on multicore architectures. We show performance competitive with, and often superior to, fine-grained locks, the most widely used method of preventing error-inducing data access in parallel operations.
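
    The core idea, scheduling tasks by the data they declare rather than by code order, can be hinted at with a toy scheduler: each task names the collection keys it will touch, conflicting tasks are serialized on those keys, and independent tasks overlap. This is a minimal Python sketch of the principle, assuming an invented submit(keys, fn) API; it is not Cadmium's syntax or implementation.

```python
# Toy illustration of "schedule data, not code": each task declares the
# collection keys it will touch, and a per-key lock set keeps conflicting
# tasks from running at the same time while independent tasks overlap.
# The submit(keys, fn) API is invented for this sketch; it is not Cadmium.
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor
import threading

class DataScheduler:
    def __init__(self, workers=4):
        self.pool = ThreadPoolExecutor(max_workers=workers)
        self.locks = defaultdict(threading.Lock)        # one lock per data key

    def submit(self, keys, fn, *args):
        """Run fn(*args) once every declared key is exclusively held."""
        ordered = [self.locks[k] for k in sorted(set(keys))]  # fixed order avoids deadlock
        def run():
            for lock in ordered:
                lock.acquire()
            try:
                return fn(*args)
            finally:
                for lock in reversed(ordered):
                    lock.release()
        return self.pool.submit(run)

# Usage: two transfers on a shared dict; the pair sharing key "b" serializes.
accounts = {"a": 100, "b": 50, "c": 10}

def transfer(src, dst, amount):
    accounts[src] -= amount
    accounts[dst] += amount

sched = DataScheduler()
f1 = sched.submit(["a", "b"], transfer, "a", "b", 30)
f2 = sched.submit(["b", "c"], transfer, "b", "c", 20)
f1.result(); f2.result()
print(accounts)   # {'a': 70, 'b': 60, 'c': 30}
```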

    Contraction Obstructions for Connected Graph Searching

    We consider the connected variant of the classic mixed search game where, in each search step, the cleaned edges form a connected subgraph. We consider graph classes with bounded connected monotone mixed search number and ask whether the obstruction set of those classes, with respect to the contraction partial ordering, is finite. In general, there is no guarantee that these sets are finite, as graphs are not well-quasi-ordered under the contraction partial ordering. In this paper we provide the obstruction set for k = 2. This set is finite: it consists of 174 graphs and completely characterizes the graphs with connected monotone mixed search number at most 2. Our proof reveals that the "sense of direction" of an optimal search strategy matters for connected search, in contrast to the original, unconnected case.
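
    The defining constraint of the connected variant is that, after every search step, the cleaned edges must induce a connected subgraph. A small illustrative check of that invariant for an edge set (not taken from the paper):

```python
# Sketch of the connectedness constraint on the cleaned edges: the subgraph
# formed by the cleaned edges (and their endpoints) must be connected after
# every step of the search.  Purely illustrative helper, not from the paper.
def cleaned_edges_connected(cleaned):
    """cleaned: iterable of 2-tuples (undirected edges); empty counts as connected."""
    cleaned = [tuple(e) for e in cleaned]
    if not cleaned:
        return True
    adj = {}
    for u, v in cleaned:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    start = cleaned[0][0]
    seen, stack = {start}, [start]
    while stack:
        x = stack.pop()
        for y in adj[x]:
            if y not in seen:
                seen.add(y)
                stack.append(y)
    return seen == set(adj)

print(cleaned_edges_connected([(1, 2), (2, 3)]))   # True
print(cleaned_edges_connected([(1, 2), (3, 4)]))   # False: two components
```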

    Deconstructing the Overhead in Parallel Applications

    Performance problems in parallel programs manifest as a lack of scalability. These scalability issues are often very difficult to debug. They can stem from synchronization overhead, poor thread scheduling decisions, or contention for hardware resources such as shared caches. Traditional profiling tools attribute program cycles to different functions, but they do not generate immediate insight into the issues limiting scalability. Profiling information is very program-specific and is usually processed manually by a human expert in a time-consuming and cumbersome process. Our experience in tuning the performance of parallel applications led us to discover that performance tuning can be considerably simplified, and even to some degree automated, if profiling measurements are organized according to several intuitive performance factors common to most parallel programs. In this work we present these factors and propose a hierarchical framework that composes them. We present three case studies where analyzing profiling data according to the proposed principle led us to improve the performance of three parallel programs by a factor of 6-20×. Our work lays the foundation for new ways of organizing and visualizing profiling data in performance-tuning tools.
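
    To make the proposed organization concrete, the toy sketch below aggregates profiled cycles into a small hierarchy of performance factors instead of attributing them to individual functions. The factor names follow the overheads mentioned above (synchronization, scheduling, shared-cache contention); the structure and sample numbers are invented for illustration.

```python
# Toy sketch: attribute profiled cycles to a small hierarchy of performance
# factors instead of to individual functions.  Factor names follow the
# overheads named in the abstract; the sample numbers are invented.
from collections import defaultdict

samples = [                       # (top-level factor, sub-factor, cycles)
    ("useful work", "compute", 6_200),
    ("synchronization", "lock wait", 2_100),
    ("synchronization", "barrier wait", 400),
    ("scheduling", "idle cores", 900),
    ("hardware contention", "shared cache misses", 1_400),
]

tree = defaultdict(lambda: defaultdict(int))
for factor, sub, cycles in samples:
    tree[factor][sub] += cycles
total = sum(c for _, _, c in samples)

# Print each factor and sub-factor as a share of total cycles, largest first.
for factor, subs in sorted(tree.items(), key=lambda kv: -sum(kv[1].values())):
    print(f"{factor:<20} {sum(subs.values()) / total:6.1%}")
    for sub, cycles in sorted(subs.items(), key=lambda kv: -kv[1]):
        print(f"  {sub:<18} {cycles / total:6.1%}")
```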

    Synchronization via scheduling
